Automatic segmentation of the kidney and kidney tumour in Computed Tomography (CT) images is essential, as it requires far less time than the current gold standard of manual segmentation. However, many hospitals still rely on manual study and segmentation of CT images by medical practitioners because of its higher accuracy. This study therefore focuses on developing an approach for automatic kidney and kidney tumour segmentation in contrast-enhanced CT images. A method based on a Convolutional Neural Network (CNN) is proposed, in which a 3D U-Net segmentation model was developed and trained to delineate the kidney and kidney tumour from CT scans. Each CT image was pre-processed before being input to the CNN, and the effect of down-sampled and patch-wise input images on model performance was analysed. The proposed method was evaluated on the publicly available 2021 Kidney and Kidney Tumour Segmentation Challenge (KiTS21) dataset. The best-performing model recorded an average training Dice score of 0.6129, with kidney and kidney tumour Dice scores of 0.7923 and 0.4344, respectively. On the test set, the model obtained a kidney Dice score of 0.8034 and a kidney tumour Dice score of 0.4713, with an average Dice score of 0.6374.
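For reference, the Dice score reported above measures volumetric overlap between a predicted and a ground-truth segmentation mask. The following is a minimal sketch of how it is typically computed on binary label volumes; the array names, toy volumes, and smoothing term are illustrative and not taken from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative usage on a toy 3D volume (not KiTS21 data)
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1
target[1:4, 1:3, 1:3] = 1
print(round(dice_score(pred, target), 4))  # 0.8 for this toy example
```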
Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, several probing methods have been applied, but they fail to consider the inherent characteristics of code. In this paper, we address this problem by proposing a novel probing method, CAT-probing, to quantitatively interpret how CodePTMs attend to code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers, filtering out tokens whose attention scores are too small. We then define a new metric, the CAT-score, to measure the commonality between the token-level attention scores generated by CodePTMs and the pair-wise distances between the corresponding AST nodes. The higher the CAT-score, the stronger the ability of the CodePTM to capture code structure. We conduct extensive experiments integrating CAT-probing with representative CodePTMs for different programming languages. Experimental results show the effectiveness of CAT-probing for CodePTM interpretation. Our code and data are publicly available at https://github.com/nchen909/CodeAttention.
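The abstract does not give the exact formula for the CAT-score, so the sketch below only illustrates the general idea of comparing a token-level attention matrix against pair-wise AST-node distances (here via a Spearman correlation over token pairs); the function name, toy matrices, and choice of correlation are assumptions, and the paper's actual metric may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def attention_structure_agreement(attention: np.ndarray, ast_distance: np.ndarray) -> float:
    """Illustrative agreement score between a token-level attention matrix and
    pair-wise AST-node distances (not the paper's exact CAT-score).

    attention:    (n, n) attention weights between code tokens
    ast_distance: (n, n) tree distances between the tokens' AST nodes
    """
    n = attention.shape[0]
    iu = np.triu_indices(n, k=1)  # use each token pair once
    # Higher attention should coincide with closer AST nodes,
    # so correlate attention with the *negative* distance.
    rho, _ = spearmanr(attention[iu], -ast_distance[iu])
    return float(rho)

# Toy example with 4 tokens
attn = np.array([[0.0, 0.6, 0.2, 0.2],
                 [0.6, 0.0, 0.3, 0.1],
                 [0.2, 0.3, 0.0, 0.5],
                 [0.2, 0.1, 0.5, 0.0]])
dist = np.array([[0, 1, 3, 3],
                 [1, 0, 2, 2],
                 [3, 2, 0, 1],
                 [3, 2, 1, 0]])
print(attention_structure_agreement(attn, dist))
```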
Although the study of instant-messaging use in organizations has a long history, we know little about how people today participate in group chat channels and interact with others. In this short note, our goal is to update existing knowledge about how group chat is used in the context of today's organizations. Leveraging our privileged access to the R&D division of a multinational IT company, we collected 4,300 publicly accessible group chat channels in Slack. By qualitatively coding 100 channels, we identified nine channel categories, such as project channels and event channels. We further defined a feature metric with 21 features to describe the group communication styles of these group chat channels, and we successfully trained a machine learning model that can automatically classify a given group channel into one of the nine categories. In addition, we illustrate how these communication metrics can be used to analyse a team's collaboration activities. We focus on 117 project teams for which we have performance data; we further collected Slack group data for 54 of these 117 teams and generated communication-style metrics for each. With these data, we were able to build regression models to reveal the relationship between these group communication styles and an indicator of project team performance.
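The abstract only names the ingredients of the classification step (a 21-feature communication-style vector per channel and a model that predicts one of nine categories). The sketch below shows that kind of pipeline with scikit-learn; the classifier choice, feature values, and labels are entirely made up and are not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

N_FEATURES = 21   # per-channel communication-style features (e.g. message rate,
                  # reply ratio, participant count); the names are illustrative only
CATEGORIES = 9    # channel categories such as "project" or "event"

rng = np.random.default_rng(0)
X = rng.random((100, N_FEATURES))        # stand-in for 100 hand-coded channels
y = np.arange(100) % CATEGORIES          # placeholder labels, one of nine categories

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # model choice is an assumption
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```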
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., temporality. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representations. In addition, the inherent temporal relations are captured by aligning video-text pairs as a whole at different time resolutions through a multi-modal temporal relation exploration task. Furthermore, we introduce the shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvements, respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
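The abstract mentions a shuffling test for measuring temporal reliance but not its exact protocol. The sketch below shows one plausible form of such a test (compare a model's score on ordered versus temporally shuffled frame sequences); the function names and toy scoring model are hypothetical and not taken from HiTeA.

```python
import random
from typing import Callable, List, Sequence

def shuffling_test(score_fn: Callable[[List[Sequence]], float],
                   videos: List[Sequence],
                   seed: int = 0) -> float:
    """Illustrative temporal-reliance probe: the larger the drop when frames
    are shuffled, the more the model (or dataset) relies on temporal order.

    score_fn: evaluates a model on a list of frame sequences and returns a metric
    videos:   each element is an ordered sequence of frames (or frame features)
    """
    rng = random.Random(seed)
    ordered = score_fn(videos)
    shuffled = score_fn([rng.sample(list(v), len(v)) for v in videos])
    return ordered - shuffled  # temporal-reliance gap

# Toy usage: a "model" that rewards ascending frame indices
toy_videos = [list(range(8)) for _ in range(4)]
toy_score = lambda vids: sum(
    sum(a < b for a, b in zip(v, v[1:])) / (len(v) - 1) for v in vids) / len(vids)
print(shuffling_test(toy_score, toy_videos))
```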
Multilingual BERT (mBERT) has demonstrated considerable cross-lingual syntactic ability, whereby it enables effective zero-shot cross-lingual transfer of syntactic knowledge. The transfer is more successful between some languages than others, but it is not well understood what leads to this variation and whether it fairly reflects differences between languages. In this work, we investigate the distributions of grammatical relations induced from mBERT in the context of 24 typologically different languages. We demonstrate that the distance between the distributions of different languages is highly consistent with their syntactic differences in terms of linguistic formalisms. Such differences, learnt via self-supervision, play a crucial role in zero-shot transfer performance and can be predicted by variation in morphosyntactic properties between languages. These results suggest that mBERT properly encodes languages in a way consistent with linguistic diversity, and they provide insights into the mechanism of cross-lingual transfer.
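The abstract refers to a distance between languages' distributions of grammatical relations without naming the measure. A common choice for comparing such categorical distributions is the Jensen-Shannon distance, sketched below on made-up relation frequencies; the relation inventory and numbers are illustrative, not the paper's data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical relative frequencies of a few dependency relations
# (nsubj, obj, amod, case) in two languages; numbers are illustrative.
lang_a = np.array([0.30, 0.25, 0.25, 0.20])
lang_b = np.array([0.20, 0.20, 0.30, 0.30])

# Jensen-Shannon distance between the two relation distributions
print(jensenshannon(lang_a, lang_b, base=2))
```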
This paper revisits a simple yet very effective computational paradigm, Deep Mutual Learning (DML). We observe that its effectiveness is highly correlated with its excellent generalization quality. We explain the performance improvement of DML from a new perspective: it is approximately a Bayesian posterior sampling procedure. This also establishes a foundation for applying the Rényi divergence to improve the original DML, as it brings prior-related variance control (in the context of DML). We therefore propose Rényi Divergence Deep Mutual Learning (RDML). Our empirical results demonstrate the advantage of marrying DML with the Rényi divergence: the flexible control imposed by the Rényi divergence is able to further improve DML towards learning better-generalized models.
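For reference, the Rényi divergence of order \(\alpha\) between two discrete distributions \(P\) and \(Q\) is the standard quantity

\[
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \sum_{x} p(x)^{\alpha}\, q(x)^{1-\alpha}, \qquad \alpha > 0,\ \alpha \neq 1,
\]

which recovers the Kullback-Leibler divergence in the limit \(\alpha \to 1\). This textbook definition is included only for context and is not quoted from the paper itself.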
This paper reviews the challenge on super-resolution of compressed image and video at AIM 2022. The challenge includes two tracks. Track 1 targets super-resolution of compressed images, and Track 2 targets super-resolution of compressed videos. In Track 1, we use the popular DIV2K dataset as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art in super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
In recent years, neural networks have been expanding rapidly, with novel strategies and applications. However, several challenges in neural network technologies remain unsolved, even though they will inevitably have to be addressed for critical applications. Attempts have been made to overcome these challenges in neural network computing by representing and embedding domain knowledge in symbolic form. As a result, the concept of neuro-symbolic learning (NeSyL) has emerged, which incorporates aspects of symbolic representation and brings common sense into neural networks. NeSyL has shown promising results in domains where interpretability, reasoning, and explainability are crucial, such as video and image captioning, question answering and reasoning, health informatics, and genomics. This review presents a comprehensive survey of state-of-the-art NeSyL approaches, their principles, advances in machine and deep learning algorithms, applications such as ophthalmology, and, most importantly, the future perspectives of this emerging field.
In this paper, we propose a novel self-supervised method for forecasting depth in future, unobserved real-world frames. This work is the first to explore self-supervised learning for estimating the monocular depth of unobserved future frames of a video. Existing works rely on a large number of annotated samples to generate probabilistic predictions of the depth of unseen frames; however, the need for large amounts of annotated video samples makes this impractical. Moreover, the probabilistic nature of the problem, in which one past can have multiple future outcomes, often leads to inaccurate depth estimates. Unlike previous methods, we model depth estimation of the unobserved frame as a view-synthesis problem, which treats depth estimation of the unseen video frame as an auxiliary task while the view is reconstructed back using the learned pose. This approach is not only cost-effective, since we do not use any ground-truth depth for training (making it practical), but also deterministic (past frames map to an immediate future). To solve this task, we first develop a novel depth-forecasting network, DEFNET, which estimates the depth of the unobserved future by forecasting latent features. Second, we develop a channel-attention-based pose estimation network that estimates the pose of the unobserved frame. Using this learned pose, the estimated depth map is reconstructed back into the image domain, forming a self-supervised solution. Compared with state-of-the-art alternatives, our proposed method achieves significant improvements in the Abs Rel metric in both short- and mid-term forecasting settings on the KITTI and Cityscapes benchmarks. Code is available at https://github.com/sauradip/depthforecasting
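The abstract frames future depth estimation as view synthesis supervised by reconstructing the view with a predicted depth map and pose. The sketch below shows the generic reprojection step that this family of self-supervised methods relies on (back-project pixels with depth, move them with a relative pose, project into the source view, then a photometric loss would sample the source image at those coordinates); the intrinsics and shapes are simplified placeholders and make no claim to match DEFNET's actual architecture.

```python
import numpy as np

def reproject(depth: np.ndarray, K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Back-project target-view pixels with predicted depth, transform them with
    a relative pose T (4x4), and return their pixel coordinates in the source view.
    This is the generic warping step used for photometric self-supervision."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # 3D points in target camera
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = K @ (T @ cam_h)[:3]                             # project into source camera
    return (src[:2] / np.clip(src[2:], 1e-6, None)).reshape(2, h, w)

# Toy usage: an identity pose keeps every pixel in place
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 12.0], [0.0, 0.0, 1.0]])
depth = np.full((24, 32), 5.0)
coords = reproject(depth, K, np.eye(4))
print(np.allclose(coords[0], np.mgrid[0:24, 0:32][1]))  # x-coordinates unchanged -> True
```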
Deep neural networks (DNNs) are promising tools for medical applications. However, implementing complex DNNs on battery-powered devices is challenging because of the high energy cost of communication. In this work, a convolutional neural network model is developed for detecting atrial fibrillation in electrocardiogram (ECG) signals. The model demonstrates high performance despite being trained on limited, variable-length input data. Weight pruning and logarithmic quantization are combined to introduce sparsity and reduce model size, which can be exploited to reduce data movement and lower computational complexity. The final model achieved a model compression rate of 91.1% while maintaining a high model accuracy of 91.7%, with less than 1% loss.
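The abstract names weight pruning and logarithmic quantization without giving thresholds or bit-widths. The sketch below illustrates those two operations on a weight tensor in their most common forms (zero out the smallest-magnitude weights, then round the survivors to signed powers of two); the sparsity level and quantization scheme are assumptions, not the paper's exact settings.

```python
import numpy as np

def prune_and_log_quantize(weights: np.ndarray, sparsity: float = 0.7) -> np.ndarray:
    """Illustrative magnitude pruning followed by power-of-two (logarithmic)
    quantization; the 70% sparsity target is an arbitrary placeholder."""
    w = weights.copy()
    # 1) Magnitude pruning: zero out the smallest |w| until the target sparsity is met
    threshold = np.quantile(np.abs(w), sparsity)
    w[np.abs(w) < threshold] = 0.0
    # 2) Logarithmic quantization: snap remaining weights to signed powers of two
    nonzero = w != 0
    exponents = np.round(np.log2(np.abs(w[nonzero])))
    w[nonzero] = np.sign(w[nonzero]) * np.exp2(exponents)
    return w

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 4))
q = prune_and_log_quantize(w)
print("sparsity:", float((q == 0).mean()))
print(q)
```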